Results 1 - 20 of 338
1.
Comput Methods Programs Biomed ; 251: 108198, 2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38718718

ABSTRACT

BACKGROUND AND OBJECTIVE: This paper introduces an encoder-decoder-based attentional decoder network to recognize small-size lesions in chest X-ray images. In encoder-only networks, small-size lesions disappear during the down-sampling steps or become indistinguishable in the low-resolution feature maps. To address these issues, the proposed network processes images in an encoder-decoder architecture similar to the U-Net family and classifies lesions by globally pooling high-resolution feature maps. However, two obstacles prevent U-Net-style architectures from being extended to classification: (1) the up-sampling procedure consumes considerable resources, and (2) an effective pooling approach for high-resolution feature maps is lacking. METHODS: The proposed network therefore employs a lightweight attentional decoder and a harmonic magnitude transform. The attentional decoder up-samples the given features using the low-resolution features as the key and value and the high-resolution features as the query. Because multi-scale features interact, the up-sampled features embody global context at high resolution while maintaining pathological locality. In addition, the harmonic magnitude transform is devised for pooling high-resolution feature maps in the frequency domain. We borrow the shift theorem of the Fourier transform to preserve translation invariance and further reduce the parameters of the pooling layer with an efficient embedding strategy. RESULTS: The proposed network achieves state-of-the-art classification performance on three public chest X-ray datasets: NIH, CheXpert, and MIMIC-CXR. CONCLUSIONS: The proposed efficient encoder-decoder network recognizes small-size lesions well in chest X-ray images by efficiently up-sampling feature maps through an attentional decoder and by processing high-resolution feature maps with the harmonic magnitude transform. We open-source our implementation at https://github.com/Lab-LVM/ADNet.
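A minimal PyTorch sketch of the cross-attention up-sampling idea described in this abstract (high-resolution features as the query, low-resolution features as the key and value); the module and parameter names are illustrative and are not taken from the ADNet repository.

import torch
import torch.nn as nn

class AttentionalUpsample(nn.Module):
    """Up-sample coarse features by letting high-resolution queries attend to them."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, hi_res: torch.Tensor, lo_res: torch.Tensor) -> torch.Tensor:
        # hi_res: (B, C, H, W) features at the target (high) resolution -> query
        # lo_res: (B, C, h, w) encoder features at a coarser resolution -> key/value
        B, C, H, W = hi_res.shape
        q = hi_res.flatten(2).transpose(1, 2)    # (B, H*W, C)
        kv = lo_res.flatten(2).transpose(1, 2)   # (B, h*w, C)
        out, _ = self.attn(self.norm(q), kv, kv)
        out = out.transpose(1, 2).reshape(B, C, H, W)
        return hi_res + out  # residual: keep local detail, add global context

# example: AttentionalUpsample(dim=256)(hi, lo) with hi of shape (1, 256, 32, 32) and lo of shape (1, 256, 8, 8)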

2.
J Imaging Inform Med ; 2024 May 01.
Article in English | MEDLINE | ID: mdl-38693333

ABSTRACT

Ischemic stroke segmentation at the acute stage is vital for assessing the severity of patients' impairment and guiding therapeutic decision-making for reperfusion. Although many deep learning studies have shown attractive performance in medical segmentation, it is difficult to apply models trained on public data to private hospital datasets. Here, we demonstrate an ensemble model that employs two different multimodal approaches for better generalization to external datasets. First, a segmentation model is jointly trained on diffusion-weighted imaging (DWI) and apparent diffusion coefficient (ADC) MR modalities, and inference is performed on the DWI images. Second, a channel-wise segmentation model is trained by concatenating the DWI and ADC images as input, and inference is then performed using both MR modalities. Before training with ischemic stroke data, we used BraTS 2021, a public brain tumor dataset, for transfer learning. An extensive ablation study evaluates which strategy learns better representations for ischemic stroke segmentation. nnU-Net, well known for its robustness, was selected as the baseline model. The proposed method was evaluated on three different datasets: Asan Medical Center (AMC) I and II, and the 2022 Ischemic Stroke Lesion Segmentation (ISLES) challenge. Our experiments were validated on a large, multi-center, multi-scanner dataset comprising 846 scans. Not only can stroke lesion models benefit from transfer learning on brain tumor data, but combining the MR modalities with different training schemes also substantially improves segmentation performance. The method achieved a top-1 ranking in the ongoing ISLES'22 challenge and performed particularly well on lesion-wise metrics of interest to neuroradiologists, achieving a Dice coefficient of 78.69% and a lesion-wise F1 score of 82.46%. The method was also relatively robust on the AMC I (Dice, 60.35%; lesion-wise F1, 68.30%) and II (Dice, 74.12%; lesion-wise F1, 67.53%) datasets in different settings. The high segmentation accuracy of our proposed method could improve radiologists' ability to detect ischemic stroke lesions in MRI images. Our model weights and inference code are available at https://github.com/MDOpx/ISLES22-model-inference.
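A minimal sketch (not the authors' released code) of the channel-wise multimodal strategy described above, in which co-registered DWI and ADC volumes are stacked along the channel axis and the two models' probability maps are averaged; the tensor shapes and equal-weight averaging are assumptions.

import torch

def make_channelwise_input(dwi: torch.Tensor, adc: torch.Tensor) -> torch.Tensor:
    # dwi, adc: (D, H, W) volumes from one patient, already co-registered and normalized
    x = torch.stack([dwi, adc], dim=0)  # (2, D, H, W): two input channels
    return x.unsqueeze(0)               # (1, 2, D, H, W): add batch dimension

def ensemble_probs(p_dwi_only: torch.Tensor, p_channelwise: torch.Tensor) -> torch.Tensor:
    # average the probability maps of the DWI-only model and the channel-wise model
    return 0.5 * (p_dwi_only + p_channelwise)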

3.
Sci Rep ; 14(1): 8924, 2024 04 18.
Article in English | MEDLINE | ID: mdl-38637613

ABSTRACT

Accurate measurement of abdominal aortic aneurysms is essential for selecting suitable stent-grafts to avoid complications of endovascular aneurysm repair. However, conventional image-based measurements are inaccurate and time-consuming. We introduce an automated workflow comprising semantic segmentation with active learning (AL) and measurement through a computer-aided design application programming interface. A total of 300 patients underwent CT scans, and semantic segmentation of the aorta, thrombus, calcification, and vessels was performed in 60-300 cases with AL across five stages using UNETR, SwinUNETR, and nnU-Net, the latter comprising 2D U-Net, 3D U-Net, a 2D-3D U-Net ensemble, and a cascaded 3D U-Net. Seven clinical landmarks were automatically measured for 96 patients. In AL stage 5, 3D U-Net achieved the highest Dice similarity coefficient (DSC), with statistically significant differences (p < 0.01) except against the 2D-3D U-Net ensemble and the cascaded 3D U-Net. SwinUNETR excelled in the 95% Hausdorff distance (HD95), with significant differences (p < 0.01) except against UNETR and 3D U-Net. The DSC of the aorta and calcification saturated at stages 1 and 4, respectively, whereas those of the thrombus and vessels continued to improve through stage 5. With the best model (3D U-Net), the time for AL-corrected segmentation compared with manual segmentation was reduced to 9.51 ± 1.02, 2.09 ± 1.06, 1.07 ± 1.10, and 1.07 ± 0.97 min for the aorta, thrombus, calcification, and vessels, respectively (p < 0.001). The differences across all measurements and in the tortuosity ratio were -1.71 ± 6.53 mm and -0.15 ± 0.25, respectively. We developed an automated workflow with semantic segmentation and measurement, demonstrating its efficiency compared with conventional methods.
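For reference, a short sketch of the Dice similarity coefficient used above to compare segmentations; this is the standard definition, not code from the study.

import numpy as np

def dice_similarity_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    # pred, gt: binary masks of identical shape
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0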


Subjects
Aortic Aneurysm, Abdominal , Blood Vessel Prosthesis Implantation , Calcinosis , Endovascular Procedures , Thrombosis , Humans , Aortic Aneurysm, Abdominal/diagnostic imaging , Problem-Based Learning , Semantics , Tomography, X-Ray Computed , Image Processing, Computer-Assisted
4.
Sci Rep ; 14(1): 8755, 2024 04 16.
Article in English | MEDLINE | ID: mdl-38627477

ABSTRACT

In this paper, we present an in-depth analysis of CNN and ViT architectures on medical images, with the goal of providing insights into subsequent research directions. In particular, the origins of deep neural network performance on medical images should be explainable, but there has been a paucity of studies on such explainability from the perspective of network architecture. Therefore, we investigate the origin of model performance, which is the clue to explaining deep neural networks, focusing on the two most relevant architectures, CNNs and ViTs. We present four analyses, covering (1) robustness in a noisy environment, (2) consistency of the translation invariance property, (3) visual recognition with obstructed images, and (4) whether features are acquired from shape or texture, to compare the architectural origins of the differences in visual recognition performance between CNNs and ViTs. Furthermore, the discrepancies between medical and generic images are explored with respect to these analyses. We find that medical images, unlike generic ones, exhibit class-sensitive characteristics. Finally, we propose a straightforward ensemble method based on our analyses, demonstrating that our findings can inform follow-up studies. Our analysis code will be publicly available.


Subjects
Neural Networks, Computer
5.
J Rheum Dis ; 31(2): 97-107, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38559800

ABSTRACT

Objective: Ankylosing spondylitis (AS) is a chronic inflammatory arthritis causing structural damage and radiographic progression of the spine due to repeated and continuous inflammation over a long period. This study applies machine learning models to predict radiographic progression in AS patients using time-series data from electronic medical records (EMRs). Methods: EMR data, including baseline characteristics, laboratory findings, drug administration, and modified Stoke AS Spine Score (mSASSS), were collected from 1,123 AS patients between January 2001 and December 2018 at a single center at the time of the first (T1), second (T2), and third (T3) visits. Radiographic progression at the (n+1)th visit (P(n+1) = (mSASSS(n+1) - mSASSS(n)) / (T(n+1) - T(n)) ≥ 1 unit per year) was predicted using follow-up visit datasets from T1 to Tn. We used three machine learning methods (logistic regression with the least absolute shrinkage and selection operator, random forest, and extreme gradient boosting) with three-fold cross-validation. Results: The random forest model using the T1 EMR dataset best predicted the radiographic progression P2 among the machine learning models tested, with a mean accuracy of 73.73% and an area under the curve of 0.79. Among the T1 variables, the most important for predicting radiographic progression were, in order, total mSASSS, age, and alkaline phosphatase. Conclusion: Prognosis prediction models using time-series data showed reasonable performance with the clinical features of the first-visit dataset when predicting radiographic progression.
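A minimal scikit-learn sketch of a random forest with three-fold cross-validation as described above; the feature matrix and labels are placeholders, not the study's EMR data.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# X: baseline (T1) EMR features; y: progression label, 1 if
# (mSASSS_2 - mSASSS_1) / (T_2 - T_1) >= 1 unit per year, else 0
rng = np.random.default_rng(0)
X = rng.normal(size=(1123, 20))    # placeholder feature matrix
y = rng.integers(0, 2, size=1123)  # placeholder labels

clf = RandomForestClassifier(n_estimators=500, random_state=0)
auc = cross_val_score(clf, X, y, cv=3, scoring="roc_auc")
print("3-fold AUC:", auc.mean())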

6.
Sci Rep ; 14(1): 7661, 2024 04 01.
Article in English | MEDLINE | ID: mdl-38561420

ABSTRACT

Complex temporal bone anatomy complicates operations; thus, surgeons must practice to mitigate risks and improve patient safety and outcomes. However, existing training methods often involve prohibitive costs and ethical problems. Therefore, we developed an educational mastoidectomy simulator that reproduces mechanical properties using 3D printing. The simulator was modeled on computed tomography images of a patient undergoing a mastoidectomy. The infill was modeled for each anatomical part to provide a realistic drilling sensation, and bone and other anatomical structures were printed in distinct colors to enhance the simulator's educational utility. The mechanical properties were evaluated by measuring the screw insertion torque for infill specimens and cadaveric temporal bones, and usability was investigated with a five-point Likert-scale questionnaire completed by five otolaryngologists. The maximum insertion torque values of the sigmoid sinus, tegmen, and semicircular canal were 1.08 ± 0.62, 0.44 ± 0.42, and 1.54 ± 0.43 N mm, corresponding in strength to infill specimens of 40%, 30%, and 50%, respectively. Otolaryngologists rated the quality and usability at 4.25 ± 0.81 and 4.53 ± 0.62, respectively. The mastoidectomy simulator can provide realistic bone-drilling feedback for educational mastoidectomy training while reinforcing skills and comprehension of anatomical structures.


Subjects
Mastoidectomy , Simulation Training , Humans , Printing, Three-Dimensional , Temporal Bone/surgery , Simulation Training/methods
7.
Sci Rep ; 14(1): 7551, 2024 03 30.
Article in English | MEDLINE | ID: mdl-38555414

ABSTRACT

Transfer learning plays a pivotal role in addressing the paucity of data, expediting training, and enhancing model performance. Nonetheless, the prevailing practice of transfer learning predominantly relies on pre-trained models designed for the natural image domain, which may not be well suited to the grayscale medical image domain. Recognizing the significance of leveraging transfer learning in medical research, we constructed class-balanced pediatric radiograph datasets, collectively referred to as PedXnets, organized by radiographic view and built from pediatric radiographs collected over 24 years at Asan Medical Center. Approximately 70,000 X-ray images were used for PedXnet pre-training. Three sets of PedXnet pre-training weights were constructed using Inception V3 for radiographic view classification at different granularities: Model-PedXnet-7C, Model-PedXnet-30C, and Model-PedXnet-68C. We validated the transferability and positive effects of the PedXnets through pediatric downstream tasks, including fracture classification and bone age assessment (BAA). Evaluation with classification and regression metrics showed that the Model-PedXnets performed better in quantitative assessments. Additionally, visual analyses confirmed that the Model-PedXnets focused more on meaningful regions of interest.
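A minimal sketch of fine-tuning an Inception V3 backbone for a pediatric downstream task such as fracture classification, assuming torchvision >= 0.13; loading PedXnet weights would replace the ImageNet weights shown here.

import torch.nn as nn
from torchvision import models

# Load a pre-trained Inception V3 and replace the classification head.
backbone = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
backbone.aux_logits = False   # drop the auxiliary classifier for fine-tuning
backbone.AuxLogits = None
backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # e.g., fracture vs. no fracture

# Grayscale radiographs would be replicated to 3 channels and resized to 299x299
# before being passed to the network.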


Subjects
Deep Learning , Fractures, Bone , Humans , Child , Machine Learning , Radiography
8.
Sci Rep ; 14(1): 5722, 2024 03 08.
Article in English | MEDLINE | ID: mdl-38459159

ABSTRACT

Accurate lesion diagnosis through computed tomography (CT) and advances in laparoscopic and robotic surgery have increased partial nephrectomy survival rates. However, accurately marking the kidney resection area through the laparoscope remains a prevalent challenge. Therefore, we fabricated and evaluated a 4D-printed kidney surgical guide (4DP-KSG) for laparoscopic partial nephrectomies based on CT images. The kidney phantom and 4DP-KSG were designed based on CT images from a renal cell carcinoma patient, and the 4DP-KSG was fabricated using shape-memory polymers. The 4DP-KSG was compressed to a 10 mm thickness and then restored to simulate laparoscopic port passage. Bland-Altman analysis was used to assess 4DP-KSG shape and marking accuracy before compression and after restoration with three operators. The kidney phantom's shape accuracy was 0.436 ± 0.333 mm, and the 4DP-KSG's shape accuracy was 0.818 ± 0.564 mm before compression and 0.389 ± 0.243 mm after restoration, with no significant differences. The 4DP-KSG marking accuracy was 0.952 ± 0.682 mm before compression and 0.793 ± 0.677 mm after restoration, with no statistical differences between operators (p = 0.899 and 0.992). In conclusion, our 4DP-KSG can be used for laparoscopic partial nephrectomies, providing precise and quantitative kidney tumor marking across operators before compression and after restoration.


Subjects
Kidney Neoplasms , Laparoscopy , Humans , Nephrectomy/methods , Kidney/diagnostic imaging , Kidney/surgery , Kidney/pathology , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/surgery , Laparoscopy/methods , Printing, Three-Dimensional
9.
Sci Rep ; 14(1): 4587, 2024 02 26.
Article in English | MEDLINE | ID: mdl-38403628

ABSTRACT

The aim of our study was to assess the performance of content-based image retrieval (CBIR) of similar chest computed tomography (CT) scans in obstructive lung disease. This retrospective study included patients with obstructive lung disease who underwent volumetric chest CT scans. The CBIR database included 600 chest CT scans from 541 patients. To assess system performance, follow-up chest CT scans of 50 patients, whose CT findings were stable between baseline and follow-up as confirmed by thoracic radiologists, were evaluated as query cases. The CBIR system retrieved the five most similar CT scans for each query case from the database by quantifying and comparing emphysema extent and size, airway wall thickness, and peripheral pulmonary vasculature. The rate at which the paired scan of each query was retrieved within the top one to five results was assessed. Two expert chest radiologists evaluated the visual similarity between the query and retrieved CT scans using a five-point grading scale. The rates of retrieving the paired query CT were 60.0% (30/50) and 68.0% (34/50) for top-three and top-five retrievals, respectively. Radiologists rated 64.8% (95% confidence interval 58.8-70.4) of the retrieved CT scans with a visual similarity score of four or five, and at least one case scored five points in 74% (74/100) of all query cases. The proposed CBIR system for obstructive lung disease, integrating quantitative CT measures, demonstrated the potential to retrieve chest CT scans with similar imaging phenotypes. Further refinement and validation in this field would be valuable.
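A minimal sketch of the retrieval step: each scan is represented by its quantitative CT features and the five nearest database scans are returned; the feature set and the use of Euclidean distance are assumptions for illustration.

import numpy as np

def retrieve_top_k(query_feat: np.ndarray, db_feats: np.ndarray, k: int = 5) -> np.ndarray:
    # query_feat: (F,) z-normalized quantitative CT features (emphysema extent,
    # airway wall thickness, vascular measures, ...)
    # db_feats: (N, F) matrix of the same features for the database scans
    dists = np.linalg.norm(db_feats - query_feat[None, :], axis=1)
    return np.argsort(dists)[:k]  # indices of the k most similar scans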


Subjects
Pulmonary Emphysema , Tomography, X-Ray Computed , Humans , Retrospective Studies , Tomography, X-Ray Computed/methods , Cone-Beam Computed Tomography , Radiologists
10.
Korean J Radiol ; 25(3): 224-242, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38413108

ABSTRACT

The emergence of Chat Generative Pre-trained Transformer (ChatGPT), a chatbot developed by OpenAI, has garnered interest in the application of generative artificial intelligence (AI) models in the medical field. This review summarizes different generative AI models and their potential applications in medicine and explores the evolving landscape of generative adversarial networks and diffusion models since the introduction of generative AI. These models have made valuable contributions to the field of radiology. This review also explores the significance of synthetic data in addressing privacy concerns and augmenting data diversity and quality within the medical domain, and it emphasizes the role of inversion in the investigation of generative models, outlining an approach to replicate this process. We provide an overview of large language models, focusing on prominent representatives such as GPT and bidirectional encoder representations from transformers (BERT), and discuss recent initiatives involving language-vision models in radiology, including the Large Language and Vision Assistant for Biomedicine (LLaVA-Med), to illustrate their practical application. This comprehensive review offers insights into the wide-ranging applications of generative AI models in clinical research and emphasizes their transformative potential.


Subjects
Artificial Intelligence , Radiology , Humans , Diagnostic Imaging , Software , Language
11.
J Imaging Inform Med ; 2024 Feb 21.
Article in English | MEDLINE | ID: mdl-38381382

ABSTRACT

Recent advances in contrastive learning have significantly improved the performance of deep learning models. In contrastive learning of medical images, handling positive representations is sometimes difficult because strong augmentation can disrupt contrastive learning: standardized CXRs differ only subtly from one another compared with the differences introduced by augmenting positive pairs, so additional effort is required. In this study, we propose the intermediate feature approximation (IFA) loss, which improves the performance of contrastive convolutional neural networks by focusing more on positive representations of CXRs without additional augmentations. The IFA loss encourages the feature maps of a query image and its positive pair to resemble each other by maximizing the cosine similarity between the intermediate feature outputs of the original data and the positive pairs. We therefore use the InfoNCE loss, a commonly used loss that addresses negative representations, together with the IFA loss, which addresses positive representations, to improve the contrastive network. We evaluated the network on various downstream tasks, including classification, object detection, and a generative adversarial network (GAN) inversion task. The downstream results demonstrate that the IFA loss can improve performance by effectively overcoming data imbalance and data scarcity; furthermore, it can serve as a perceptual loss encoder for GAN inversion. In addition, we have made our model publicly available to facilitate access and encourage further research and collaboration in the field.
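A minimal sketch, not the authors' implementation, of an intermediate-feature-approximation-style term: the cosine similarity between intermediate feature maps of a query image and its positive pair is maximized, and the term is added to InfoNCE.

import torch
import torch.nn.functional as F

def ifa_loss(feat_query: torch.Tensor, feat_positive: torch.Tensor) -> torch.Tensor:
    # feat_query, feat_positive: (B, C, H, W) intermediate feature maps from the encoder
    q = F.normalize(feat_query.flatten(1), dim=1)
    p = F.normalize(feat_positive.flatten(1), dim=1)
    cos = (q * p).sum(dim=1)       # per-sample cosine similarity
    return (1.0 - cos).mean()      # maximizing similarity == minimizing this term

# total objective (sketch): loss = info_nce_loss + lambda_ifa * ifa_loss(f_q, f_pos)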

13.
Neuro Oncol ; 2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38253989

ABSTRACT

BACKGROUND: This study evaluated whether generative artificial intelligence-based augmentation (GAA) can provide diverse and realistic imaging phenotypes and improve deep learning-based classification of isocitrate dehydrogenase (IDH) type in glioma compared with neuroradiologists. METHODS: For model development, 565 patients (346 IDH-wildtype, 219 IDH-mutant) with paired contrast-enhanced T1 and FLAIR MRI scans were collected from a tertiary hospital and The Cancer Imaging Archive. Performance was tested on internal (n = 119; 78 IDH-wildtype, 41 IDH-mutant [IDH1 and IDH2]) and external (n = 108; 72 IDH-wildtype, 36 IDH-mutant) test sets. GAA was developed using a score-based diffusion model and a ResNet50 classifier. The optimal GAA was selected in comparison with a null model. Two neuroradiologists (R1, R2) assessed the realism and diversity of the imaging phenotypes and predicted IDH mutation status. The performance of a classifier trained with optimal GAA was compared with that of the neuroradiologists using the area under the receiver operating characteristic curve (AUC). The effect of tumor size and contrast enhancement on GAA performance was tested. RESULTS: Generated images demonstrated realism (Turing test: 47.5%-50.5%) and diversity indicating IDH type. Optimal GAA was achieved with augmentation using 110,000 generated slices (AUC: 0.938). The classifier trained with optimal GAA demonstrated significantly higher AUC values than the neuroradiologists in both the internal (R1, P=.003; R2, P<.001) and external (R1, P<.01; R2, P<.001) test sets. GAA with large tumors or predominant enhancement showed performance comparable to optimal GAA (internal test: AUC 0.956 and 0.922; external test: 0.810 and 0.749). CONCLUSIONS: Application of generative AI with realistic and diverse images provided better diagnostic performance than neuroradiologists for predicting IDH type in glioma.

14.
NPJ Digit Med ; 7(1): 2, 2024 Jan 05.
Article in English | MEDLINE | ID: mdl-38182886

ABSTRACT

Treatment decisions for patients with hepatocellular carcinoma are determined by a wide range of factors, and there is a significant difference between the recommendations of widely used staging systems and the actual initial treatment choices. Herein, we propose a machine learning-based clinical decision support system suitable for use in multi-center settings. We collected data from nine institutions in South Korea for the training and validation datasets. The internal and external datasets included 935 and 1,750 patients, respectively. We developed a model based on 20 clinical variables, consisting of two stages: the first recommends the initial treatment using an ensemble voting machine, and the second predicts post-treatment survival using a random survival forest algorithm. We derived the first and second treatment options from the results with the highest and second-highest probabilities given by the ensemble model and predicted their post-treatment survival. When only the first treatment option was accepted, the mean accuracy of treatment recommendation in the internal and external datasets was 67.27% and 55.34%, respectively. The accuracy increased to 87.27% and 86.06%, respectively, when the second option was included as a correct answer. Harrell's C index, the integrated time-dependent AUC, and the integrated Brier score of survival prediction in the internal and external datasets were 0.8381 and 0.7767, 91.89 and 86.48, and 0.12 and 0.14, respectively. The proposed system can assist physicians by providing data-driven predictions for reference when making treatment decisions, drawing on the experience of larger institutions or of other physicians within the same institution.
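A minimal scikit-learn sketch of the first stage described above, an ensemble voting machine whose two highest-probability classes become the first and second treatment recommendations; the base estimators and variable names are assumptions, not the published model.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across base models
)

# X_train: (n_patients, 20) clinical variables; y_train: treatment actually given
# ensemble.fit(X_train, y_train)
# proba = ensemble.predict_proba(X_query)           # (n_query, n_treatments)
# top2 = np.argsort(proba, axis=1)[:, ::-1][:, :2]  # first and second recommendations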

15.
Cancers (Basel) ; 16(2)2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38275871

ABSTRACT

Lymphovascular invasion (LVI) is one of the most important prognostic factors in gastric cancer, as it indicates a higher likelihood of lymph node metastasis and a poorer overall outcome for the patient. Despite its importance, the detection of LVI(+) in histopathology specimens of gastric cancer can be a challenging task for pathologists, as invasion can be subtle and difficult to discern. Herein, we propose a deep learning-based LVI(+) detection method using H&E-stained whole-slide images. The ConViT model showed the best performance in terms of both AUROC and AUPRC among the classification models (AUROC: 0.9796; AUPRC: 0.9648). The AUROC and AUPRC of YOLOX, computed from the augmented patch-level confidence scores, were slightly lower than those of the ConViT classification model (by 0.0094 and 0.0225, respectively). With weighted averaging of the patch-level confidence scores, the ensemble model exhibited the best AUROC, AUPRC, and F1 scores of 0.9880, 0.9769, and 0.9280, respectively. The proposed model is expected to contribute to precision medicine by potentially saving examination-related time and labor and by reducing disagreements among pathologists.
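An illustrative sketch of combining patch-level confidence scores from the classifier and the detector into one slide-level score by weighted averaging; the max-pooling of patch scores and the weight value are assumptions, not details reported in the abstract.

import numpy as np

def slide_level_score(cls_patch_scores: np.ndarray,
                      det_patch_scores: np.ndarray,
                      w_cls: float = 0.5) -> float:
    # cls_patch_scores, det_patch_scores: per-patch LVI(+) confidences for one slide
    cls_score = cls_patch_scores.max()  # assumed aggregation: most suspicious patch
    det_score = det_patch_scores.max()
    return w_cls * cls_score + (1.0 - w_cls) * det_score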

16.
Comput Methods Programs Biomed ; 245: 108002, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38215659

ABSTRACT

BACKGROUND AND OBJECTIVES: Although magnetic resonance imaging (MRI) is commonly used for breast tumor detection, significant challenges remain in determining and presenting the three-dimensional (3D) morphology of tumors to guide breast-conserving surgery. To address this challenge, we developed the augmented reality-breast surgery guide (AR-BSG) and compared its performance with that of a traditional 3D-printed breast surgical guide (3DP-BSG). METHODS: Based on the MRI findings of a breast cancer patient, a breast phantom comprising skin, body, and tumor was fabricated through 3D printing and silicone casting. The AR-BSG and 3DP-BSG were applied using surgical plans based on the breast phantom's computed tomography images. Three operators independently inserted a catheter into the phantom using each guide. Their targeting accuracy was then evaluated using Bland-Altman analysis with limits of agreement (LoA). Differences between the users of each guide were evaluated using the intraclass correlation coefficient (ICC). RESULTS: The entry- and end-point errors of the AR-BSG were -0.34 ± 0.68 mm (LoA: -1.71 to 1.01 mm) and 0.81 ± 1.88 mm (LoA: -4.60 to 3.00 mm), respectively, whereas the 3DP-BSG showed entry- and end-point errors of -0.28 ± 0.70 mm (LoA: -1.69 to 1.11 mm) and -0.62 ± 1.24 mm (LoA: -3.00 to 1.80 mm), respectively. The AR-BSG's entry- and end-point ICC values were 0.99 and 0.97, respectively, whereas those of the 3DP-BSG were 0.99 and 0.99. CONCLUSIONS: The AR-BSG can consistently and accurately localize tumor margins for surgeons, with guiding accuracy not inferior to that of the 3DP-BSG. Additionally, compared with the 3DP-BSG, the AR-BSG can offer better spatial perception and visualization, lower costs, and a shorter setup time.
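For reference, a short sketch of the Bland-Altman bias and 95% limits of agreement used above; this is the standard formulation, not code from the study.

import numpy as np

def bland_altman_loa(measured: np.ndarray, reference: np.ndarray):
    # measured, reference: paired measurements (e.g., guided vs. planned positions)
    diff = measured - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)  # bias and 95% LoA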


Subjects
Augmented Reality , Breast Neoplasms , Surgery, Computer-Assisted , Humans , Female , Mastectomy, Segmental , Tomography, X-Ray Computed/methods , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/surgery , Phantoms, Imaging , Imaging, Three-Dimensional/methods , Surgery, Computer-Assisted/methods , Printing, Three-Dimensional
18.
Korean J Orthod ; 54(1): 48-58, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38072448

ABSTRACT

Objective: To quantify the effects of midline-related landmark identification on midline deviation measurements in posteroanterior (PA) cephalograms using a cascaded convolutional neural network (CNN). Methods: A total of 2,903 PA cephalogram images obtained from nine university hospitals were divided into training, internal validation, and test sets (n = 2,150, 376, and 377, respectively). As the gold standard, two orthodontic professors marked the bilateral landmarks, including the frontozygomatic suture point and latero-orbitale (LO), and the midline landmarks, including the crista galli, anterior nasal spine (ANS), upper dental midpoint (UDM), lower dental midpoint (LDM), and menton (Me). For the test, Examiner-1 and Examiner-2 (third-year and first-year orthodontic residents) and the cascaded-CNN model marked the landmarks. Point-to-point errors of landmark identification, the successful detection rate (SDR), and the distance and direction of midline landmark deviation from the midsagittal line (ANS-mid, UDM-mid, LDM-mid, and Me-mid) were measured, and statistical analysis was performed. Results: The cascaded-CNN algorithm showed a clinically acceptable level of point-to-point error (1.26 mm vs. 1.57 mm for Examiner-1 and 1.75 mm for Examiner-2). The average SDR within the 2 mm range was 83.2%, with high accuracy at the LO (right, 96.9%; left, 97.1%) and UDM (96.9%). The absolute measurement errors were less than 1 mm for ANS-mid, UDM-mid, and LDM-mid compared with the gold standard. Conclusions: The cascaded-CNN model may be considered an effective tool for the automatic identification of midline landmarks and quantification of midline deviation in PA cephalograms of adult patients, regardless of variations in the image acquisition method.
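A short sketch of the successful detection rate (SDR) within a 2 mm range as used above; this is the standard definition, not code from the study.

import numpy as np

def successful_detection_rate(pred_xy: np.ndarray, gt_xy: np.ndarray, thresh_mm: float = 2.0) -> float:
    # pred_xy, gt_xy: (N, 2) landmark coordinates in millimetres
    errors = np.linalg.norm(pred_xy - gt_xy, axis=1)  # point-to-point errors
    return float((errors <= thresh_mm).mean())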

19.
Orthod Craniofac Res ; 27(1): 64-77, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37326233

ABSTRACT

BACKGROUND: This study aimed to assess the error range of cephalometric measurements based on landmarks detected using cascaded CNNs and to determine how horizontal and vertical positional errors of individual landmarks affect lateral cephalometric measurements. METHODS: In total, 120 lateral cephalograms were obtained consecutively from patients (mean age, 32.5 ± 11.6 years) who visited the Asan Medical Center, Seoul, Korea, for orthodontic treatment between 2019 and 2021. An automated lateral cephalometric analysis model previously developed from a nationwide multi-centre database was used to digitize the lateral cephalograms. The horizontal and vertical landmark position errors attributable to the AI model were defined as the distances on the x- and y-axes between the landmark identified by the human examiner and that identified by the AI model. The differences between the cephalometric measurements based on the landmarks identified by the AI model and those identified by the human examiner were assessed, as was the association between the lateral cephalometric measurements and the positional errors of the landmarks composing each measurement. RESULTS: The mean differences in the angular and linear measurements based on AI versus human landmark localization were 0.99 ± 1.05° and 0.80 ± 0.82 mm, respectively. Significant differences between the measurements derived from AI-based and human localization were observed for all cephalometric variables except SNA, pog-Nperp, facial angle, SN-GoGn, FMA, Bjork sum, U1-SN, U1-FH, IMPA, L1-NB (angular) and interincisal angle. CONCLUSIONS: Errors in landmark positions, especially those that define reference planes, may significantly affect cephalometric measurements. The possibility of errors generated by automated lateral cephalometric analysis systems should be considered when using such systems for orthodontic diagnosis.


Subjects
Face , Neural Networks, Computer , Humans , Young Adult , Adult , Cephalometry , Radiography , Reproducibility of Results
20.
Sci Rep ; 13(1): 20976, 2023 11 28.
Article in English | MEDLINE | ID: mdl-38017064

ABSTRACT

Conventional suture anchors (CAs) and all-suture anchors (ASAs) are used for rotator cuff repair. Pull-out strength (POS) is an important factor that affects surgical outcomes. Although the fixation mechanism differs between the anchor types and depends on bone quality, few studies have compared the biomechanical properties of anchors with respect to bone quality. This study aimed to compare the biomechanical properties of anchors using osteoporotic bone (OB) and non-osteoporotic bone (NOB) simulators. Humerus simulators were fabricated from acrylonitrile butadiene styrene using fused-deposition-modeling 3D printing, adjusting the cortical bone thickness and cancellous bone density based on CT images. Cyclic loading from 10 to 50 N, 10 to 100 N, and 10 to 150 N for 10 cycles was applied to each anchor; these loads were determined clinically because the supraspinatus generates a force of 67-125 N during the daily activities of normal controls. After cyclic loading, the anchor was pulled out at a rate of 5 mm/min. Displacement, POS, and stiffness were measured. In the OB simulators, CAs showed greater gap displacement than ASAs under cyclic loading of 10-150 N, and ASAs showed higher POS and stiffness. In the NOB simulators, ASAs showed greater gap displacement than CAs under cyclic loading of 10-150 N; ASAs showed higher POS, whereas CAs showed higher stiffness. The POS of an anchor depends on anchor displacement and bone stiffness. In conclusion, ASAs demonstrated better biomechanical performance than CAs in terms of stability under cyclic loading and stiffness, with similar POS, in osteoporotic bone.


Subjects
Rotator Cuff Injuries , Suture Anchors , Humans , Biomechanical Phenomena , Rotator Cuff Injuries/surgery , Humerus/surgery , Sutures , Suture Techniques , Cadaver , Printing, Three-Dimensional